Intent Inferencer-Planner Interactions
Abstract
Interaction between automated planners and intent inferencers is a difficult problem that is not always well understood. It can be framed in terms of three key issues: shared ontology, human interaction, and display of proposals.

Shared ontology is a key issue from both practical and theoretical perspectives. Without sharing, intent-inference outputs must be interpreted by the planner into its own representations, and the intent inferencer must do the same with the planner's outputs. Furthermore, absent a commitment to a shared ontology, it is unclear in practice whether the design teams are committed to interacting in any way that would support true collaboration between the human operator and the intelligent system. Committing to a shared ontology is no panacea, however. Ontologies are hard, and shared ontologies are harder still; because knowledge-based processing is often embedded within elements of the ontology, the representational needs of planning and intent inference place considerable demands on it.

Interaction with a human operator is a difficult problem in software generally, and especially so for planners. One of the planner's most difficult problems is that the human operator is nearly always correct, so the plans the planner wants to pursue may not be the plans of the human operator. This has several implications. First, the planner must follow the human's lead, which is, at first glance, a modification to the very essence of planning: selecting courses of action. In other words, the planner will have to activate plans it has already decided are not the best course of action, which may require it to override its own knowledge. All of the preceding assumes that the planner is truly going to follow this lead. Another possibility is that the human has chosen an erroneous plan.
If the planner has any interest in intervening, it would need to differentiate between adequate choices and erroneous ones (a design option that seems to be chosen infrequently). A third possibility, and not necessarily a bad one, is to follow the operator's lead silently. If the planner were following the lead of another automated planner, a request for rationale could be appropriate; such a request might well be inappropriate for a human operator under the circumstances in which intelligent systems are currently deployed. A final difficulty with the human operator is individual differences, in which operators make choices for strictly preference-based reasons. It would be poor design for the planner to repeatedly suggest option A when the human always chooses option B.

Display of proposals is the third key issue, and an opportunity to address problems that are difficult to solve within the preceding two. The need for, and availability of, rationale may be as important as the proposed plan itself. In situations where sufficient time is readily available, the human operator will likely want to examine the rationale that supports the plan. Rationale is clearly related to the plan itself but is something the planner's designers may not have considered important. In many operational contexts the operator has insufficient time for a comprehensive review, and thus only the plan itself is displayed. The human operator's visualization of the proposed plan, supported by human pattern recognition, and trust (or lack of trust) in the automation will likely determine whether the plan is ...
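The individual-differences point above, that a planner should stop repeatedly suggesting option A when the operator always chooses option B, can be illustrated with a minimal sketch. The class and its names are hypothetical, not from the paper; this is one simple way a planner might adapt its suggestions to observed operator preferences.

```python
# Hypothetical sketch: a planner that adapts its suggestions to the
# operator's observed choices instead of insisting on its own ranking.
from collections import Counter

class PreferenceAwarePlanner:
    def __init__(self, options):
        # Candidate plans, ordered best-first by the planner's own ranking.
        self.options = list(options)
        # How often the operator has actually selected each option.
        self.chosen = Counter()

    def record_choice(self, option):
        """Observe which plan the human operator actually executed."""
        self.chosen[option] += 1

    def suggest(self):
        """Defer to a stable operator preference over the planner's ranking."""
        if self.chosen:
            favorite, count = self.chosen.most_common(1)[0]
            if count >= 2:  # simple threshold for a "stable" preference
                return favorite
        return self.options[0]  # fall back to the planner's top-ranked plan

planner = PreferenceAwarePlanner(["option A", "option B"])
print(planner.suggest())        # planner's own ranking: option A
planner.record_choice("option B")
planner.record_choice("option B")
print(planner.suggest())        # adapted to the operator: option B
```

A real system would, as the abstract notes, also need to distinguish preference-driven choices from erroneous ones before deferring to them; this sketch deliberately ignores that distinction.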
Similar resources
Requirements for Inferring the Intensions of Multitasking Reactive Agents
Multitasking complicates the problem of intent inference by requiring intent inference mechanisms to distinguish multiple streams of behavior and recognize overt task coordination actions. Given knowledge of an agent’s problem representation and architecture, it may be possible to disambiguate the agent’s actions and better infer its intent. The requirements for implementing such a system are e...
Parametric Type Inferencing for Helium
Helium is a compiler for a large subset of Haskell under development at Universiteit Utrecht. A major design criterion is the ability to give superb error messages. This is especially needful for novice functional programmers. In this paper we document the implementation of the Helium type inferencer. For purposes of experimentation with various methods of type inferencing, the type inferencer ...
Recognition, Prediction, and Planning for Assisted Teleoperation of Freeform Tasks
This paper presents a system for improving the intuitiveness and responsiveness of assisted robot teleoperation interfaces by combining intent prediction and motion planning. Two technical contributions are described. First, an intent predictor estimates the user’s desired task, and accepts freeform tasks that include both discrete types and continuous parameters (e.g., desired target positions...
Air Traffic Controller Team Intent Inference
This paper describes methods and applications of intent inference for future teams of air traffic controllers that include a strategic planning controller responsible for ‘conditioning’ the traffic flow. The Crew Activity Tracking System (CATS) provides a framework for developing intent-aware intelligent agents to support controller teams. A proof-of-concept system provides reminders to the pla...
Adaptive Music Recommendation System by Fan Yang
While sources of digital music are getting more abundant and music players are becoming increasingly feature-rich, we still struggle to find new music that we may like. This thesis explores the design and implementation of the MusicPlanner, a music recommendation application that utilizes a goal-oriented framework to recommend and play music. Goal-oriented programming approaches problems by mode...